
    3D Eddy-Current Imaging of Metal Tubes by Gradient-Based, Controlled Evolution of Level Sets

    Eddy-current non-destructive testing is widely used to detect defects within a metal structure. It can also characterize their location and shape, provided that proper maps of the variations of impedance induced by the defects are available. Here, imaging of void defects in the wall of a hollow, non-magnetic metal tube is performed by controlled evolution of level sets. The data are variations of impedance collected by a circular probe array close to the inner surface of the tube, while a coil source operated at a single frequency is set along the tube axis at some distance from the array, both receiver and source being moved simultaneously. The defect zone is represented implicitly as a zero level set, amenable to topological changes via a nonlinear iterative method that minimizes a least-squares cost functional built from the difference between the measured (computer-simulated) and model data. The procedure involves the rigorous calculation of the gradient of the variations of impedance in a multi-static configuration, using a vector domain-integral field formulation. Numerical examples, obtained via a dedicated extension of the general-purpose CIVA platform, exhibit the pros and cons of the approach for inner, outer, and through-wall void defects, with further comparisons to results provided by an independently developed binary-specialized method.
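The controlled level-set evolution can be illustrated with a minimal one-dimensional sketch: the defect region is the set where the level-set function is negative, and the front is advected with a speed field that, in the paper, would be derived from the impedance-variation gradient. The grid, speed, and time step below are illustrative assumptions, not the CIVA implementation:

```python
def level_set_step(phi, dx, speed, dt):
    """One explicit step of d(phi)/dt = -speed * |d(phi)/dx| (1-D toy)."""
    new = phi[:]
    for i in range(1, len(phi) - 1):
        grad = (phi[i + 1] - phi[i - 1]) / (2.0 * dx)  # central difference
        new[i] = phi[i] - dt * speed * abs(grad)
    return new

# signed distance to the interval [-0.5, 0.5]: phi < 0 inside the "defect"
n, dx = 101, 0.02  # grid on [-1, 1]
phi = [abs(-1.0 + i * dx) - 0.5 for i in range(n)]
phi2 = level_set_step(phi, dx, speed=1.0, dt=0.01)
inside = lambda p: sum(1 for v in p if v < 0.0)
```

With a positive speed the front moves along its outward normal, so the zero level set (and the reconstructed defect) grows; a sign change in the speed, driven by the cost-functional gradient, would shrink it instead. Topological changes (defects merging or splitting) come for free because only the function `phi`, not the boundary, is evolved.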

    A Brief Measure for the Assessment of Competence in Coping With Death: The Coping With Death Scale Short Version.

    Context. Competence in coping with death is of great importance for palliative care professionals, who face daily exposure to death. It can protect them from compassion fatigue and burnout, thus enhancing the quality of the care provided. Despite its relevance, there are only two measures of professionals' ability to cope with death. Specifically, the Coping with Death Scale (CDS) has repeatedly shown psychometric problems with some of its items. Objective. The aim of this study was to develop and validate a short version of the CDS. Methods. Nine items from the original CDS were chosen for the short version. Two cross-sectional surveys were conducted in Spanish (N = 385) and Argentinian (N = 273) palliative care professionals. The CDS and the Professional Quality of Life Scale were used in this study. Statistical analyses included two confirmatory factor analyses (CFAs), followed by a standard measurement invariance routine. Reliability estimates and evidence of validity based on relations with other measures were also gathered. Results. CFA models had excellent fit in both the Spanish (χ²(27) = 107.043, P < 0.001; Comparative Fit Index [CFI] = 0.978; Tucker-Lewis Index [TLI] = 0.970; Root Mean Square Error of Approximation [RMSEA] = 0.093 [0.075, 0.112]; Standardized Root Mean Square Residual [SRMR] = 0.030) and Argentinian (χ²(27) = 102.982, P < 0.001; CFI = 0.963; TLI = 0.950; RMSEA = 0.106 [0.085, 0.128]) samples. A standard measurement invariance routine was carried out. The most parsimonious model (χ²(117) = 191.738, P < 0.001; CFI = 0.987; TLI = 0.992; RMSEA = 0.046 [0.034, 0.058]; SRMR = 0.043) offered evidence of invariance across countries, with no latent mean differences. Evidence of reliability and of validity based on relations with other measures was also appropriate. Conclusion. The results support the psychometric adequacy of the short version of the CDS.
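As an illustration of one of the fit indices reported above, RMSEA can be computed directly from the chi-square statistic, its degrees of freedom, and the sample size. The sketch below uses the common point-estimate formula; the published value (0.093) may differ slightly depending on the estimator and software used:

```python
import math

def rmsea(chi2, df, n):
    """RMSEA point estimate: sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Spanish sample from the abstract: chi2(27) = 107.043, N = 385
print(round(rmsea(107.043, 27, 385), 3))  # 0.088
```

Values of roughly 0.06 or below are conventionally read as good fit, which is why the invariance model's 0.046 is the strongest result reported.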

    PFEM-based modeling of industrial granular flows

    The potential of numerical methods for the solution and optimization of industrial granular-flow problems is widely accepted by the industries of this field, the challenge being to promote their industrial practice effectively. In this paper, we make an exploratory step in this regard by using a numerical model based on continuum mechanics and the so-called Particle Finite Element Method (PFEM). This goal is achieved by focusing on two specific industrial applications in the mining and pellet-manufacturing industries: silo discharge and calculation of power draw in tumbling mills. Both examples are representative of variations in the mechanical response of the granular material, ranging from a stagnant configuration to a flow condition. The silo discharge is validated using experimental data collected on a full-scale, flat-bottomed cylindrical silo. The simulation is conducted with the aim of characterizing and understanding the correlation between flow patterns and pressures for concentric discharges. In the second example, the potential of PFEM as a numerical tool to track the positions of the particles inside the drum is analyzed. Pressure and wall-pressure distributions are also studied. The power draw is computed and validated against experiments in which the power is plotted in terms of the rotational speed of the drum.
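The power draw mentioned above is, at its core, torque times angular velocity; in a PFEM simulation the torque would be obtained by integrating the contact tractions exerted by the granular charge on the drum wall. A minimal sketch with hypothetical numbers (the torque and speed below are illustrative, not from the paper):

```python
import math

def mill_power_draw(torque_nm, rpm):
    """P = tau * omega, with omega converted from rev/min to rad/s."""
    omega = 2.0 * math.pi * rpm / 60.0
    return torque_nm * omega

# hypothetical: 500 N*m of resisting torque at 30 rpm
print(round(mill_power_draw(500.0, 30.0), 1))  # ~1570.8 W
```

Plotting this quantity against `rpm` for simulated torques is what the validation curve in the paper's second example amounts to.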

    Energy-efficient algebra kernels on FPGAs for high-performance computing

    The dissemination of multi-core architectures and the later irruption of massively parallel devices led to a revolution in High-Performance Computing (HPC) platforms over the last decades. As a result, Field-Programmable Gate Arrays (FPGAs) are re-emerging as a versatile and more energy-efficient alternative to other platforms. Traditional FPGA design implies using low-level Hardware Description Languages (HDLs) such as VHDL or Verilog, which follow an entirely different programming model from standard software languages, and their use requires specialized knowledge of the underlying hardware. In recent years, manufacturers have made great efforts to provide High-Level Synthesis (HLS) tools in order to allow a greater adoption of FPGAs in the HPC community. Our work studies the use of multi-core hardware and different FPGAs to address Numerical Linear Algebra (NLA) kernels such as the general matrix multiplication (GEMM) and the sparse matrix-vector multiplication (SpMV). Specifically, we compare the behavior of fine-tuned kernels on a multi-core CPU and HLS implementations on FPGAs. We perform the experimental evaluation of our implementations on a low-end and a cutting-edge FPGA platform, in terms of runtime and energy consumption, and compare the results against the Intel MKL library on CPU.
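As a reference point for the SpMV kernel discussed above, the sparse matrix-vector product in the common CSR (compressed sparse row) layout can be sketched in a few lines; HLS tools synthesize hardware from loop nests of essentially this shape. The small matrix is illustrative only:

```python
def spmv_csr(indptr, indices, data, x):
    """y = A @ x for a matrix A stored in CSR form."""
    y = [0.0] * (len(indptr) - 1)
    for row in range(len(y)):
        # nonzeros of this row live in data[indptr[row]:indptr[row+1]]
        for k in range(indptr[row], indptr[row + 1]):
            y[row] += data[k] * x[indices[k]]
    return y

# A = [[1, 0, 2],
#      [0, 3, 0]]
indptr, indices, data = [0, 2, 3], [0, 2, 1], [1.0, 2.0, 3.0]
print(spmv_csr(indptr, indices, data, [1.0, 1.0, 1.0]))  # [3.0, 3.0]
```

The irregular, data-dependent inner loop is exactly what makes SpMV harder to pipeline on an FPGA than the dense, statically scheduled GEMM.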

    Effects of caffeine supplementation on physical performance and mood dimensions in elite and trained-recreational athletes.

    Background: Caffeine supplementation (CAFF) has an established ergogenic effect on physical performance and the psychological response to exercise. However, few studies have compared the response to CAFF intake among athletes of different competition levels. This study compares the acute effects of CAFF on anaerobic performance, mood and perceived effort in elite and moderately trained recreational athletes. Methods: Participants for this randomized, controlled, crossover study were 8 elite athletes (in the senior boxing national team) and 10 trained-recreational athletes. Under two experimental conditions, CAFF supplementation (6 mg/kg) or placebo (PLAC), the athletes completed a Wingate test. Subjective exertion during the test was recorded as the rating of perceived exertion (RPE), both at the general level (RPEgeneral) and at the muscular (RPEmuscular) and cardiorespiratory (RPEcardio) levels. Before the Wingate test, participants completed the Profile of Mood States (POMS) and the Subjective Vitality Scale (SVS). Results: In response to CAFF intake, improvements were noted in Wpeak (11.22 ± 0.65 vs 10.70 ± 0.84; p = 0.003; ηp² = 0.44), Wavg (8.75 ± 0.55 vs 8.41 ± 0.46; p = 0.001; ηp² = 0.53) and time taken to reach Wpeak (7.56 ± 1.58 vs 9.11 ± 1.53; p < 0.001; ηp² = 0.57) in both the elite and trained-recreational athletes. However, only the elite athletes showed significant increases in tension (+325%), vigor (+31%) and SVS (+28%) scores after the intake of CAFF compared to levels recorded under the PLAC condition (p < 0.05). Similarly, levels of vigor after consuming CAFF were significantly higher in the elite than in the trained-recreational athletes (+5.8%). Conclusions: CAFF supplementation improved anaerobic performance in both the elite and recreational athletes. However, the ergogenic effect of CAFF on several mood dimensions and subjective vitality was greater in the elite athletes.
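For context, the reported Wpeak and Wavg means translate into roughly 4-5% relative improvements under CAFF. A small sketch of that arithmetic, using the group means exactly as reported (the abstract does not state the units, which for a Wingate test are typically W/kg):

```python
def pct_change(caff, plac):
    """Relative change of the CAFF mean versus the placebo mean, in percent."""
    return 100.0 * (caff - plac) / plac

print(round(pct_change(11.22, 10.70), 1))  # Wpeak: 4.9
print(round(pct_change(8.75, 8.41), 1))    # Wavg:  4.0
```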

    Photodetection of propagating quantum microwaves in circuit QED

    We develop the theory of a metamaterial composed of an array of discrete quantum absorbers inside a one-dimensional waveguide that implements a high-efficiency microwave photon detector. A basic design consists of a few metastable superconducting nanocircuits spread inside and coupled to a one-dimensional waveguide in a circuit QED setup. The arrival of a propagating quantum microwave field induces an irreversible change in the population of the internal levels of the absorbers, due to a selective absorption of photon excitations. This design is studied using a formal but simple quantum field theory, which allows us to evaluate the single-photon absorption efficiency for one-absorber and many-absorber setups. As an example, we consider a particular design that combines a coplanar coaxial waveguide with superconducting phase qubits, a natural but not exclusive playground for experimental implementations. This work and a possible experimental realization may pave the way for "all-optical" quantum information processing with propagating quantum microwaves, in which a microwave photodetector could play a key role.
    Comment: 27 pages, submitted to Physica Scripta for Nobel Symposium on "Qubits for Quantum Information", 200
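The benefit of using many absorbers can be illustrated with a deliberately naive cascade model in which each absorber independently captures a fixed fraction of the incoming photon. The paper's field-theory treatment is more subtle (absorbers interact through the waveguide modes), so this is only a back-of-the-envelope sketch with an assumed single-absorber efficiency:

```python
def cascade_efficiency(eta_single, n_absorbers):
    """Naive model: photon survives each of n independent absorbers
    with probability (1 - eta_single); detection = not surviving all."""
    return 1.0 - (1.0 - eta_single) ** n_absorbers

# assumed 50% single-absorber efficiency, three absorbers in a row
print(cascade_efficiency(0.5, 3))  # 0.875
```

Even this crude model captures the qualitative point of the abstract: adding metastable absorbers along the line pushes the detection efficiency toward unity.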

    Effect of magnetically simulated zero-gravity and enhanced gravity on the walk of the common fruitfly

    Understanding the effects of gravity on biological organisms is vital to the success of future space missions. Previous studies in Earth orbit have shown that the common fruitfly (Drosophila melanogaster) walks more quickly and more frequently in microgravity, compared with its motion on Earth. However, flight preparation procedures and forces endured on launch made it difficult to implement on the Earth's surface a control that exposed flies to the same sequence of major physical and environmental changes. To address the uncertainties concerning these behavioural anomalies, we have studied the walking paths of D. melanogaster in a pseudo-weightless environment (0g*) in our Earth-based laboratory. We used a strong magnetic field, produced by a superconducting solenoid, to induce a diamagnetic force on the flies that balanced the force of gravity. Simultaneously, two other groups of flies were exposed to a pseudo-hypergravity environment (2g*) and a normal gravity environment (1g*) within the spatially varying field. The flies had a larger mean speed in 0g* than in 1g*, and a smaller one in 2g*. The mean square distance travelled by the flies grew more rapidly with time in 0g* than in 1g*, and more slowly in 2g*. We observed no other clear effects of the magnetic field, up to 16.5 T, on the walks of the flies. We compare the effect of diamagnetically simulated weightlessness with that of weightlessness in an orbiting spacecraft, and identify the cause of the anomalous behaviour as the altered effective gravity.
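The pseudo-weightless condition relies on the diamagnetic body force balancing gravity, which requires (χ/μ₀)·B·(dB/dz) = ρg. A quick estimate for water-like tissue (assumed values: ρ = 1000 kg/m³, volume susceptibility χ ≈ −9×10⁻⁶, neither stated in the abstract) shows why such a strong solenoid is needed:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def required_b_dbdz(rho, chi, g=9.81):
    """Field-gradient product B*dB/dz (T^2/m) needed to levitate a
    diamagnetic material of density rho and susceptibility chi."""
    return rho * g * MU0 / abs(chi)

print(round(required_b_dbdz(1000.0, -9.0e-6)))  # ~1370 T^2/m
```

Reaching B·dB/dz of order 10³ T²/m within a usable bore is only practical with a superconducting solenoid of the kind the study used, and the 2g* and 1g* regions arise naturally at positions where the same spatially varying field adds to or fails to cancel gravity.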

    Prediction of thrombo-embolic risk in patients with hypertrophic cardiomyopathy (HCM Risk-CVA)

    Aims Atrial fibrillation (AF) and thrombo-embolism (TE) are associated with reduced survival in hypertrophic cardiomyopathy (HCM), but the absolute risk of TE in patients with and without AF is unclear. The primary aim of this study was to derive and validate a model for estimating the risk of TE in HCM. Exploratory analyses were performed to determine predictors of TE, the performance of the CHA2DS2-VASc score, and outcome with vitamin K antagonists (VKAs). Methods and results A retrospective, longitudinal cohort from seven institutions was used to develop multivariable Cox regression models fitted with pre-selected predictors. Bootstrapping was used for validation. Of 4821 HCM patients recruited between 1986 and 2008, 172 (3.6%) reached the primary endpoint of cerebrovascular accident (CVA), transient ischaemic attack (TIA), or systemic peripheral embolus within 10 years. A total of 27.5% of patients had a CHA2DS2-VASc score of 0, of whom 9.8% developed TE during follow-up. Cox regression revealed an association between TE and age, AF, the interaction between age and AF, TE prior to first evaluation, NYHA class, left atrial (LA) diameter, vascular disease, and maximal LV wall thickness. There was a curvilinear relationship between LA size and TE risk. The model predicted TE with a C-index of 0.75 [95% confidence interval (CI) 0.70-0.80], and the D-statistic was 1.30 (95% CI 1.05-1.56). VKA treatment was associated with a 54.8% (95% CI 31-97%, P = 0.037) relative risk reduction in HCM patients with AF. Conclusions The study shows that the risk of TE in HCM patients can be estimated using a small number of simple clinical features. LA size, in particular, should be monitored closely, and the assessment and treatment of conventional vascular risk factors should be routine practice in older patients. Exploratory analyses show, for the first time, evidence of a reduction of TE with VKA treatment. The CHA2DS2-VASc score does not appear to correlate well with clinical outcome in patients with HCM and should not be used to assess TE risk in this population.
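The model's discrimination is summarized above by a C-index of 0.75. As a sketch of how such a statistic is obtained, the following pure-Python function computes Harrell's concordance index: among all comparable patient pairs (one had the event before the other's follow-up time), it counts how often the model assigned the higher risk to the patient who had the event first. The times, event flags, and risk scores below are hypothetical toy data, not from the study:

```python
def c_index(times, events, risks):
    """Harrell's concordance: fraction of comparable pairs whose
    predicted risks are ordered consistently with observed event times."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # pair (i, j) is comparable if i had the event before j's time
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5  # ties count half
    return concordant / comparable

times  = [2, 4, 6, 8]          # follow-up times (e.g. years)
events = [1, 1, 0, 1]          # 1 = TE occurred, 0 = censored
risks  = [0.9, 0.7, 0.2, 0.4]  # hypothetical model risk scores
print(c_index(times, events, risks))  # 1.0
```

A value of 0.5 means the model orders pairs no better than chance, 1.0 means perfect ordering; the study's 0.75 sits usefully in between.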